Experimental setting:
- Select three or more popular benchmark datasets; most papers evaluate on three to five datasets.
- Determine the training/testing split. By default, strictly follow the setting used in recent papers. If their settings are not applicable, define your own split in a reasonable way and fully describe the details in the paper, as sketched below.
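A minimal sketch of such a custom split; the dataset size, the 80/20 ratio, and the seed are illustrative assumptions, not a prescribed setting:

    # Sketch: a reproducible train/test split when no standard split is available.
    # The dataset size, the 80/20 ratio, and the seed below are illustrative assumptions.
    import numpy as np

    def split_indices(num_samples, test_ratio=0.2, seed=42):
        """Return train/test index arrays; the fixed seed makes the split reproducible."""
        rng = np.random.default_rng(seed)
        indices = rng.permutation(num_samples)
        num_test = int(num_samples * test_ratio)
        return indices[num_test:], indices[:num_test]

    train_idx, test_idx = split_indices(num_samples=10000)  # e.g., 8000 train / 2000 test

Recording the seed and ratio alongside the results makes it easy to report the exact split in the paper.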
Experimental implementation:
- For commonly used models, avoid unnecessary reimplementation; use the released code and pretrained models.
- If closely related papers release their code, build on their code rather than starting from scratch.
- Document your code with readable and meaningful comments. Otherwise, even you will not understand your own code after a long time.
- Manage different versions of your code carefully, otherwise it will get messy very rapidly. Use a version control tool such as Git/GitHub to create branches, revert to old versions, etc. The brute-force approach of creating a separate folder for each version is not recommended.
Experimental running:
- Avoid redundant or unnecessary experiments.
- Run experiments in parallel using all available computing resources (see the launcher sketch after this list).
- Tune hyper-parameters wisely rather than exhaustively, e.g., run a coarse search first and refine around the best settings.
- Save the final output and important intermediate outputs for future analysis (see the saving sketch after this list).
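A minimal launcher sketch for running a small hyper-parameter grid in parallel, one process per GPU. The script name train.py, its flags, and the GPU ids are hypothetical placeholders for your own setup:

    # Sketch: launch each hyper-parameter configuration as a separate process pinned to one GPU.
    # `train.py`, its flags, and the list of GPU ids are hypothetical placeholders.
    import itertools
    import os
    import subprocess

    learning_rates = [1e-3, 1e-4]
    batch_sizes = [32, 64]
    gpus = [0, 1, 2, 3]  # assumed available devices; one job per GPU here

    jobs = list(itertools.product(learning_rates, batch_sizes))
    processes = []
    for gpu, (lr, bs) in zip(itertools.cycle(gpus), jobs):
        cmd = ["python", "train.py", "--lr", str(lr), "--batch-size", str(bs)]
        env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)}
        processes.append(subprocess.Popen(cmd, env=env))

    for p in processes:
        p.wait()  # block until every run has finished

If there are more configurations than GPUs, a simple job queue (one worker per GPU) avoids oversubscribing a single device.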
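A minimal sketch of saving the final and intermediate outputs of one run; the directory layout, file names, and contents are illustrative assumptions:

    # Sketch: persist config, final metrics, and per-sample outputs so results can be
    # re-analysed later without re-running the experiment. File names are illustrative.
    import json
    import os

    def save_run(output_dir, config, metrics, predictions):
        os.makedirs(output_dir, exist_ok=True)
        with open(os.path.join(output_dir, "config.json"), "w") as f:
            json.dump(config, f, indent=2)   # hyper-parameters used for this run
        with open(os.path.join(output_dir, "metrics.json"), "w") as f:
            json.dump(metrics, f, indent=2)  # final scores
        with open(os.path.join(output_dir, "predictions.json"), "w") as f:
            json.dump(predictions, f)        # intermediate / per-sample outputs for error analysis

    # Placeholder values only; fill in with the actual run's config, metrics, and predictions.
    save_run("runs/exp001", {"lr": 1e-4, "batch_size": 32}, {"accuracy": None}, [])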
Experimental record:
- Make a to-do list of the experiments you plan to run.
- For the experiments you have done, summarize your observations and conclusions. Then, adjust the remaining to-do list accordingly.